Successor Representation
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance.

Beyond representing the position of an animal in a given environment, the activity of neurons in the hippocampus (areas CA1 and CA3) is known to be influenced by a range of task-dependent factors, for instance the presence of a reward at a specific location in the environment; yet we do not fully understand how these representations emerge and what they are good for. The present paper proposes that these observations reflect the circuit implementing a specific algorithm (using a successor representation, SR, initially proposed by Dayan in 1993) for learning state values in reinforcement learning; moreover, it suggests that the representation in an upstream region (medial EC) may provide a basis for a hierarchical decomposition of space. Overall, some of the ideas put forward here are intriguing and potentially interesting for theoretical neuroscientists studying hippocampal coding; however, the link to the neural data is relatively weak and the presentation of the material is difficult to follow in places.

Detailed comments

1. Content: Since the algorithmic part of the paper is not new, the key contribution of this work is the link between the SR and the activity of neurons in the hippocampus. Unfortunately, this link is not clear in several respects: a) it is never spelled out exactly how the matrix M relates to the firing of the neurons in the corresponding hippocampal circuit. If there is a one-to-one map between firing rates and M(s,s'), how can a downstream circuit compute V(s)?
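For readers unfamiliar with the SR readout the reviewer is referring to: in Dayan's (1993) formulation, M(s, s') is the expected discounted future occupancy of s' starting from s, and values follow from a single linear readout, V = M R. The toy transition matrix and variable names below are illustrative assumptions, not taken from the reviewed paper.

```python
import numpy as np

gamma = 0.9
# Toy 3-state chain ending in an absorbing rewarded state (an assumption
# for illustration only).
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])   # transition probabilities T(s, s')
R = np.array([0.0, 0.0, 1.0])    # reward delivered in state 2

# Closed-form successor representation: M = (I - gamma * T)^-1
M = np.linalg.inv(np.eye(3) - gamma * T)

# The reviewer's point: if rates encode M(s, s') one-to-one, computing
# V(s) still requires a weighted sum over all s' with weights R(s').
V = M @ R
print(V)   # values increase toward the rewarded state
```

This makes the reviewer's question concrete: the readout needs access to the reward vector R, not just the SR entries themselves.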
Attractor Network Dynamics Enable Preplay and Rapid Path Planning in Maze-like Environments
Dane S. Corneil, Wulfram Gerstner
Rodents navigating in a well-known environment can rapidly learn and revisit observed reward locations, often after a single trial. While the mechanism for rapid path planning is unknown, the CA3 region in the hippocampus plays an important role, and emerging evidence suggests that place cell activity during hippocampal "preplay" periods may trace out future goal-directed trajectories. Here, we show how a particular mapping of space allows for the immediate generation of trajectories between arbitrary start and goal locations in an environment, based only on the mapped representation of the goal. We show that this representation can be implemented in a neural attractor network model, resulting in bump-like activity profiles resembling those of the CA3 region of hippocampus. Neurons tend to locally excite neurons with similar place field centers, while inhibiting other neurons with distant place field centers, such that stable bumps of activity can form at arbitrary locations in the environment. The network is initialized to represent a point in the environment, then weakly stimulated with an input corresponding to an arbitrary goal location. We show that the resulting activity can be interpreted as a gradient ascent on the value function induced by a reward at the goal location. Indeed, in networks with large place fields, we show that the network properties cause the bump to move smoothly from its initial location to the goal, around obstacles or walls. Our results illustrate that an attractor network with hippocampal-like attributes may be important for rapid path planning.
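The abstract's central claim, that bump movement can be read as gradient ascent on a goal-induced value function, can be sketched on a small open grid. This is an illustrative toy, not the authors' attractor-network model: the value function, grid size, and greedy hill-climbing rule are all assumptions made for the sketch.

```python
import numpy as np

gamma = 0.9
size = 5
goal = (4, 4)

# Assumed goal-induced value function: gamma ** (Manhattan distance to goal),
# standing in for the value landscape a reward at the goal would induce.
V = np.array([[gamma ** (abs(i - goal[0]) + abs(j - goal[1]))
               for j in range(size)] for i in range(size)])

def ascend(start, V, max_steps=50):
    """Greedy hill-climb: step to the neighboring cell with the highest value,
    a crude stand-in for the activity bump drifting up the value gradient."""
    path = [start]
    pos = start
    for _ in range(max_steps):
        i, j = pos
        neighbors = [(i + di, j + dj)
                     for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                     if 0 <= i + di < size and 0 <= j + dj < size]
        best = max(neighbors, key=lambda p: V[p])
        if V[best] <= V[pos]:
            break                 # local maximum reached (the goal)
        pos = best
        path.append(pos)
    return path

path = ascend((0, 0), V)
print(path[-1])   # the climb terminates at the goal cell
```

In the paper's setting the interesting cases involve obstacles and walls, where a Manhattan-distance value function no longer suffices; the sketch only illustrates the open-field intuition.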